Deconvolutional Density Network: Modeling Free-Form Conditional Distributions
Authors
Abstract
Conditional density estimation (CDE) is the task of estimating the probability of an event conditioned on some inputs. A neural network (NN) can also be used to compute the output distribution for a continuous domain, which can be viewed as an extension of the regression task. Nevertheless, it is difficult to explicitly approximate a distribution without knowing information about its general form a priori. In order to fit an arbitrary conditional distribution, discretizing the continuous domain into bins is an effective strategy, as long as we have sufficiently narrow bins and very large data. However, collecting enough data is often hard and falls far short of that ideal in many circumstances, especially in multivariate CDE because of the curse of dimensionality. In this paper, we demonstrate the benefits of modeling free-form conditional distributions using a deconvolution-based neural net framework, coping with the data-deficiency problems of discretization. It has the advantage of being flexible while taking advantage of the hierarchical smoothness offered by the deconvolution layers. We compare our method to a number of other density-estimation approaches and show that our Deconvolutional Density Network (DDN) outperforms the competing methods on both univariate and multivariate tasks. The code of DDN is available at https://github.com/NBICLAB/DDN.
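The core idea described above (discretize the output domain into bins, then let deconvolution layers impose hierarchical smoothness on the bin probabilities) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the function names and the smoothing kernel are hypothetical, and the coarse logits stand in for what a real network would predict from the conditioning input:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax: turns logits into a discrete density.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def conv_transpose_1d(x, kernel, stride):
    """Naive 1-D transposed convolution: each input value 'paints' a
    scaled copy of the kernel into the output, upsampling by `stride`.
    Output length follows the usual formula stride*(n-1) + len(kernel)."""
    k = len(kernel)
    out = np.zeros(stride * (len(x) - 1) + k)
    for i, v in enumerate(x):
        out[i * stride : i * stride + k] += v * kernel
    return out

def ddn_style_head(coarse_logits, kernel, stride=2):
    """Upsample coarse bin logits into finer bins with a transposed
    convolution (overlapping kernels smooth neighboring bins), then
    normalize with a softmax to obtain a valid histogram density."""
    fine_logits = conv_transpose_1d(coarse_logits, kernel, stride)
    return softmax(fine_logits)

# Toy example: 8 coarse logits -> 19 smoothed fine-bin probabilities.
rng = np.random.default_rng(0)
coarse = rng.normal(size=8)            # stand-in for network output
kernel = np.array([0.25, 0.5, 1.0, 0.5, 0.25])  # smoothing kernel
p = ddn_style_head(coarse, kernel)
print(len(p), round(float(p.sum()), 6))
```

The transposed convolution is what makes adjacent fine bins share mass from the same coarse logit, which is one way to read the "hierarchical smoothness" claim: a narrow-bin histogram whose shape is constrained by a coarser, smoother parameterization rather than estimated independently per bin.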
Similar resources
Neural network for estimating conditional distributions
Neural networks for estimating conditional distributions and their associated quantiles are investigated in this paper. A basic network structure is developed on the basis of kernel estimation theory, and consistency is proved from a mild set of assumptions. A number of applications within statistics, decision theory, and signal processing are suggested, and a numerical example illustrating the...
Stacked Deconvolutional Network for Semantic Segmentation
Recent progress in semantic segmentation has been driven by improving the spatial resolution under Fully Convolutional Networks (FCNs). To address this problem, we propose a Stacked Deconvolutional Network (SDN) for semantic segmentation. In SDN, multiple shallow deconvolutional networks, called SDN units, are stacked one by one to integrate contextual information and guarantee the...
Modeling Conditional Distributions of Continuous Variables in Bayesian Networks
The MTE (mixture of truncated exponentials) model was introduced as a general solution to the problem of specifying conditional distributions for continuous variables in Bayesian networks, especially as an alternative to discretization. In this paper we compare the behavior of two different approaches for constructing conditional MTE models in an example taken from Finance, which is a domain we...
Sequence Modeling with Mixtures of Conditional Maximum Entropy Distributions
We present a novel approach to modeling sequences using mixtures of conditional maximum entropy distributions. Our method generalizes the mixture of first-order Markov models by including "long-term" dependencies in the model components. The "long-term" dependencies are represented by probabilistic triggers or rules frequently used in the natural language processing (NLP) domain (such as "A...
Modeling of Distributions with Neural Approximation of Conditional Quantiles
We propose a method of recurrent estimation of conditional quantiles stemming from stochastic approximation. The method employs a sigmoidal neural network and a specialized training algorithm to approximate the conditional quantiles. The approach may be used in a wide range of fields, in particular in econometrics, medicine, data mining, and modeling.
Journal
Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence
Year: 2022
ISSN: 2159-5399, 2374-3468
DOI: https://doi.org/10.1609/aaai.v36i6.20567